Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind or, as narrow AI, is focused on one narrow task. In John Searle's terms it "would be useful for testing hypotheses about minds, but would not actually be minds". Weak AI focuses on mimicking how humans perform basic actions such as remembering things, perceiving things, and solving simple problems. Strong AI, by contrast, aims at systems that can think and learn on their own: such systems would use methods such as algorithms and prior knowledge to develop their own ways of thinking, much as human beings do, and would learn to run independently of the programmers who built them. Weak AI does not have a mind of its own and can only imitate physical behaviors that it can observe.
It is contrasted with strong AI, which has been defined in various ways.
Scholars like Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs. "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (p. 85), which is, on the other hand, what the strong-AI assumption implies.
AI can be classified as being "… limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Narrow means the robot or computer is strictly limited to solving one problem at a time. Strong AI, conversely, is as close as possible to the human brain or mind. This distinction is associated with the philosopher John Searle. The idea of strong AI is also controversial: Searle holds that the Turing test (proposed by Alan Turing in 1950 and originally called the imitation game, a test of whether a machine can behave as intelligently as a human) is not accurate or appropriate for testing for strong AI.
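To make the "single, narrowly defined task" idea concrete, the following minimal sketch illustrates what a narrow, weak-AI system looks like in practice. It is not drawn from any cited source; the function name classify_review and the keyword lists are illustrative assumptions. The system applies a fixed rule to exactly one task, labeling short review text as positive or negative, and has no capability outside that task.

```python
# Toy example of a weak ("narrow") AI system: a rule-based sentiment labeler.
# It handles exactly one task -- classifying short review text -- and nothing else.
# The keyword lists and function name are illustrative assumptions, not a standard API.

POSITIVE_WORDS = {"good", "great", "excellent", "enjoyable", "love"}
NEGATIVE_WORDS = {"bad", "poor", "terrible", "boring", "hate"}

def classify_review(text: str) -> str:
    """Label a review as 'positive', 'negative', or 'neutral' by counting keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

if __name__ == "__main__":
    print(classify_review("a great and enjoyable film"))    # positive
    print(classify_review("boring plot and terrible acting"))  # negative
```

Such a system cannot transfer its behavior to any other problem: asked to translate a sentence or plan a route, it has no applicable rules. That is the sense in which narrow AI is limited to one problem at a time, in contrast to the hypothetical strong AI described above.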